| Name | Version | Summary | Date |
|------|---------|---------|------|
| shap-explainer | 0.1.5 | SHAP explainer with ChatGPT API | 2024-11-29 07:20:51 |
| fgclustering | 1.1.1 | Forest-Guided Clustering - explainability method for Random Forest models | 2024-11-19 17:13:43 |
| shapiq | 1.1.1 | Shapley Interactions for Machine Learning | 2024-11-13 11:36:24 |
| python-define | 3.0.0 | An AI-powered command-line linguistics assistant | 2024-10-22 06:18:37 |
| explainableai | 0.10 | A comprehensive package for Explainable AI and model interpretation | 2024-10-13 06:48:03 |
| model-alignment | 0.2 | Model Alignment: aligning prompts to human preferences through natural language feedback | 2024-10-07 17:26:47 |
| quanda | 0.0.2 | Toolkit for quantitative evaluation of data attribution methods in PyTorch | 2024-10-05 14:15:29 |
| openvino-xai | 1.1.0 | OpenVINO™ Explainable AI (XAI) Toolkit: visual explanations for OpenVINO models | 2024-10-02 08:27:22 |
| xai4mri | 0.0.1 | Advanced MRI analysis using deep learning combined with explainable AI (XAI) | 2024-09-26 23:58:43 |
| aequitas-core | 2.1.1 | Aequitas core library | 2024-09-26 15:26:05 |
| dimlpfidex | 1.0.1 | Discretized Interpretable Multi-Layer Perceptron (DIMLP) and related algorithms | 2024-09-19 09:33:20 |
| mlxplain | 1.0.4 | An open platform for accelerating the development of eXplainable AI systems | 2024-09-04 13:59:03 |
| rules-extraction | 0.1.4 | Rule extraction for eXplainable AI | 2024-08-29 12:39:23 |
| fair-mango | 0.1.1 | Explore your AI model's fairness | 2024-07-23 15:10:00 |
| xai-feature-selection | 0.6 | Feature selection using XAI | 2024-07-13 20:08:15 |
| lit-nlp | 1.2 | 🔥LIT: The Learning Interpretability Tool | 2024-06-26 16:32:41 |
| inseq | 0.6.0 | Interpretability for Sequence Generation Models 🔍 | 2024-04-13 13:37:37 |